1.
JMIR Res Protoc ; 13: e53627, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38441925

ABSTRACT

BACKGROUND: Complex and expanding data sets in clinical oncology applications require flexible and interactive visualization of patient data to provide the maximum amount of information to physicians and other medical practitioners. Interdisciplinary tumor conferences in particular profit from customized tools to integrate, link, and visualize relevant data from all professions involved. OBJECTIVE: The scoping review proposed in this protocol aims to identify and present currently available data visualization tools for tumor boards and related areas. The objective of the review will be to provide not only an overview of digital tools currently used in tumor board settings, but also the data included, the respective visualization solutions, and their integration into hospital processes. METHODS: The planned scoping review process is based on the Arksey and O'Malley scoping study framework. The following electronic databases will be searched for articles published in English: PubMed, Web of Knowledge, and SCOPUS. Eligible articles will first undergo a deduplication step, followed by the screening of titles and abstracts. Second, a full-text screening will be used to reach the final decision about article selection. At least 2 reviewers will independently screen titles, abstracts, and full-text reports. Conflicting inclusion decisions will be resolved by a third reviewer. The remaining literature will be analyzed using a data extraction template proposed in this protocol. The template includes a variety of meta information as well as specific questions aiming to answer the research question: "What are the key features of data visualization solutions used in molecular and organ tumor boards, and how are these elements integrated and used within the clinical setting?" The findings will be compiled, charted, and presented as specified in the scoping study framework. Data for included tools may be supplemented with additional manual literature searches. The entire review process will be documented in alignment with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) flowchart. RESULTS: The results of this scoping review will be reported per the expanded PRISMA-ScR guidelines. A preliminary search using PubMed, Web of Knowledge, and Scopus resulted in 1320 articles after deduplication that will be included in the further review process. We expect the results to be published during the second quarter of 2024. CONCLUSIONS: Visualization is a key process in leveraging a data set's potentially available information and enabling its use in an interdisciplinary setting. The scoping review described in this protocol aims to present the status quo of visualization solutions for tumor board and clinical oncology applications and their integration into hospital processes. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/53627.

2.
Sci Rep ; 14(1): 6391, 2024 03 16.
Article in English | MEDLINE | ID: mdl-38493266

ABSTRACT

The purpose of this feasibility study is to investigate whether latent diffusion models (LDMs) are capable of generating contrast-enhanced (CE) MRI-derived subtraction maximum intensity projections (MIPs) of the breast that are conditioned on lesions. We trained an LDM with n = 2832 CE-MIPs of breast MRI examinations of n = 1966 patients (median age: 50 years) acquired between the years 2015 and 2020. The LDM was subsequently conditioned with n = 756 segmented lesions from n = 407 examinations, indicating their location and BI-RADS scores. By applying the LDM, synthetic images were generated from the segmentations of an independent validation dataset. Lesions, anatomical correctness, and realistic impression of synthetic and real MIP images were further assessed in a multi-rater study with five independent raters, each evaluating n = 204 MIPs (50% real/50% synthetic images). The detection of synthetic MIPs by the raters was akin to random guessing, with an AUC of 0.58. Interrater reliability of the lesion assessment was high both for real (Kendall's W = 0.77) and synthetic images (W = 0.85). A higher AUC was observed for the detection of suspicious lesions (BI-RADS ≥ 4) in synthetic MIPs (0.88 vs. 0.77; p = 0.051). Our results show that LDMs can generate lesion-conditioned MRI-derived CE subtraction MIPs of the breast; however, they also indicate that the LDM tended to generate rather typical or 'textbook' representations of lesions.
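
A minimal Python sketch of the two evaluation steps described above, run on synthetic data: detectability of synthetic MIPs expressed as an AUC (values near 0.5 correspond to random guessing) and inter-rater agreement expressed as Kendall's W. All variable names and the toy data are illustrative assumptions, not the study's rating data.

import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Detection of synthetic images: AUC near 0.5 means raters cannot tell them apart
is_synthetic = rng.integers(0, 2, size=204)          # ground truth: 1 = synthetic
rater_score = is_synthetic * 0.1 + rng.random(204)   # rater's "looks synthetic" score
print("detection AUC:", round(roc_auc_score(is_synthetic, rater_score), 2))

# Kendall's W over ordinal BI-RADS ratings from m raters on n images
def kendalls_w(ratings):
    """ratings: (n_items, n_raters) matrix; tie correction omitted for brevity."""
    n, m = ratings.shape
    ranks = np.column_stack([rankdata(ratings[:, j]) for j in range(m)])
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

birads = rng.integers(1, 6, size=(204, 5))           # 204 images rated by 5 raters
print("Kendall's W:", round(kendalls_w(birads), 2))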


Subjects
Breast Neoplasms; Contrast Media; Humans; Middle Aged; Female; Reproducibility of Results; Magnetic Resonance Imaging/methods; Breast/diagnostic imaging; Breast/pathology; Physical Examination; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Retrospective Studies
3.
JMIR Form Res ; 7: e50027, 2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38060305

ABSTRACT

BACKGROUND: Secondary investigations into digital health records, including electronic patient data from German medical data integration centers (DICs), pave the way for enhanced future patient care. However, only limited information is captured regarding the integrity, traceability, and quality of the (sensitive) data elements. This lack of detail diminishes trust in the validity of the collected data. From a technical standpoint, adhering to the widely accepted FAIR (Findability, Accessibility, Interoperability, and Reusability) principles for data stewardship necessitates enriching data with provenance-related metadata. Provenance offers insights into the readiness for the reuse of a data element and serves as a supplier of data governance. OBJECTIVE: The primary goal of this study is to augment the reusability of clinical routine data within a medical DIC for secondary utilization in clinical research. Our aim is to establish provenance traces that underpin the status of data integrity, reliability, and consequently, trust in electronic health records, thereby enhancing the accountability of the medical DIC. We present the implementation of a proof-of-concept provenance library integrating international standards as an initial step. METHODS: We adhered to a customized road map for a provenance framework, and examined the data integration steps across the ETL (extract, transform, and load) phases. Following a maturity model, we derived requirements for a provenance library. Using this research approach, we formulated a provenance model with associated metadata and implemented a proof-of-concept provenance class. Furthermore, we seamlessly incorporated the internationally recognized World Wide Web Consortium (W3C) provenance standard, aligned the resultant provenance records with the interoperable health care standard Fast Healthcare Interoperability Resources, and presented them in various representation formats. Ultimately, we conducted a thorough assessment of provenance trace measurements. RESULTS: This study marks the inaugural implementation of integrated provenance traces at the data element level within a German medical DIC. We devised and executed a practical method that synergizes the robustness of quality- and health standard-guided (meta)data management practices. Our measurements indicate commendable pipeline execution times, attaining notable levels of accuracy and reliability in processing clinical routine data, thereby ensuring accountability in the medical DIC. These findings should inspire the development of additional tools aimed at providing evidence-based and reliable electronic health record services for secondary use. CONCLUSIONS: The research method outlined for the proof-of-concept provenance class has been crafted to promote effective and reliable core data management practices. It aims to enhance biomedical data by imbuing it with meaningful provenance, thereby bolstering the benefits for both research and society. Additionally, it facilitates the streamlined reuse of biomedical data. As a result, the system mitigates risks, as data analysis without knowledge of the origin and quality of all data elements is rendered futile. While the approach was initially developed for the medical DIC use case, these principles can be universally applied throughout the scientific domain.
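
As an illustration of what such a provenance trace can look like, the following Python sketch uses the open-source "prov" package to record one ETL step as W3C PROV and serialize it to PROV-JSON. The namespace, identifiers, and attributes are hypothetical; the actual DIC provenance class and its mapping to FHIR Provenance are not reproduced here.

from datetime import datetime, timezone
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("dic", "https://example.org/dic/")  # hypothetical namespace

# Source and derived data elements of one ETL (extract, transform, load) step
raw = doc.entity("dic:lab-result-raw", {"dic:system": "hospital-LIS"})
fhir = doc.entity("dic:lab-result-fhir", {"dic:profile": "Observation"})

# The transformation activity and the software agent that ran it
etl = doc.activity("dic:etl-run-42",
                   startTime=datetime(2023, 1, 1, tzinfo=timezone.utc))
pipeline = doc.agent("dic:etl-pipeline-v1")

doc.used(etl, raw)                    # the activity consumed the raw element
doc.wasGeneratedBy(fhir, etl)         # ...and produced the FHIR element
doc.wasDerivedFrom(fhir, raw)         # explicit derivation trace
doc.wasAssociatedWith(etl, pipeline)  # accountability: which software did it

print(doc.serialize(indent=2))        # PROV-JSON; could be mapped to FHIR Provenance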

4.
J Med Internet Res ; 25: e48809, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37938878

ABSTRACT

BACKGROUND: In the context of the Medical Informatics Initiative, medical data integration centers (DICs) have implemented complex data flows to transfer routine health care data into research data repositories for secondary use. Data management practices are of importance throughout these processes, and special attention should be given to provenance aspects. Insufficient knowledge can lead to validity risks and reduce the confidence and quality of the processed data. The need to implement maintainable data management practices is undisputed, but there is a great lack of clarity on the current status. OBJECTIVE: Our study examines the current data management practices throughout the data life cycle within the Medical Informatics in Research and Care in University Medicine (MIRACUM) consortium. We present a framework for the maturity status of data management practices and present recommendations to enable a trustful dissemination and reuse of routine health care data. METHODS: In this mixed methods study, we conducted semistructured interviews with stakeholders from 10 DICs between July and September 2021. We used a self-designed questionnaire that we tailored to the MIRACUM DICs, to collect qualitative and quantitative data. Our study method is compliant with the Good Reporting of a Mixed Methods Study (GRAMMS) checklist. RESULTS: Our study provides insights into the data management practices at the MIRACUM DICs. We identify several traceability issues that can be partially explained by a lack of contextual information within nonharmonized workflow steps, unclear responsibilities, missing or incomplete data elements, and incomplete information about the computational environment. Based on the identified shortcomings, we suggest a data management maturity framework to reach more clarity and to help define enhanced data management strategies. CONCLUSIONS: The data management maturity framework supports the production and dissemination of accurate and provenance-enriched data for secondary use. Our work serves as a catalyst for the derivation of an overarching data management strategy, upholding data integrity and provenance characteristics as key factors. We envision that this work will lead to the generation of FAIRer and well-maintained health research data of high quality.


Subjects
Data Management; Medical Informatics; Humans; Delivery of Health Care; Surveys and Questionnaires
5.
JMIR Res Protoc ; 12: e46471, 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37566443

ABSTRACT

BACKGROUND: The anonymization of Common Data Model (CDM)-converted EHR data is essential to ensure data privacy in the use of harmonized health care data. However, applying data anonymization techniques can significantly affect many properties of the resulting data sets and thus bias research results. Few studies have reviewed these applications with a reflection of approaches to manage data utility and quality concerns in the context of CDM-formatted health care data. OBJECTIVE: Our intended scoping review aims to identify and describe (1) how formal anonymization methods are carried out with CDM-converted health care data, (2) how data quality and utility concerns are considered, and (3) how the various CDMs differ in terms of their suitability for recording anonymized data. METHODS: The planned scoping review is based on the framework of Arksey and O'Malley. Following this framework, only articles published in English will be included. The retrieval of literature items will be based on a literature search string combining keywords related to data anonymization, CDM standards, and data quality assessment. The proposed literature search query will be validated by a librarian and accompanied by manual searches to include further informal sources. Eligible articles will first undergo a deduplication step, followed by the screening of titles. Second, a full-text reading will allow the 2 reviewers involved to reach the final decision about article selection, while a domain expert will support the resolution of citation selection conflicts. Additionally, key information will be extracted, categorized, summarized, and analyzed using a proposed template in an iterative process. Tabular and graphical analyses will be presented in alignment with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) checklist. We also performed tentative searches on Web of Science to estimate the feasibility of reaching eligible articles. RESULTS: Tentative searches on Web of Science resulted in 507 nonduplicated matches, suggesting the availability of (potentially) relevant articles. Further analysis and selection steps will allow us to derive a final literature set. The completion of this scoping review is expected by the end of the fourth quarter of 2023. CONCLUSIONS: Outlining the approaches for applying formal anonymization methods to CDM-formatted health care data, while taking into account data quality and utility concerns, should provide useful insights into existing approaches and future research directions based on identified gaps. This protocol describes a schedule for performing a scoping review, which should support the conduct of follow-up investigations. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/46471.
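
To make the privacy-versus-utility trade-off concrete, the following Python sketch checks k-anonymity, one of the formal anonymization properties such a review is likely to encounter, on a CDM-style person table. The column names, the quasi-identifier set, and the value of k are illustrative assumptions, not part of the protocol.

import pandas as pd

def smallest_group(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """Size of the smallest equivalence class over the quasi-identifiers."""
    return int(df.groupby(quasi_identifiers, dropna=False).size().min())

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int = 5) -> bool:
    return smallest_group(df, quasi_identifiers) >= k

# Toy OMOP-like person table (values invented for illustration)
persons = pd.DataFrame({
    "year_of_birth": [1950, 1950, 1982, 1982, 1982],
    "gender_concept_id": [8532, 8532, 8507, 8507, 8507],
    "zip3": ["681", "681", "681", "681", "681"],
})
qi = ["year_of_birth", "gender_concept_id", "zip3"]
print(is_k_anonymous(persons, qi, k=2))   # True: every class has >= 2 rows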

6.
Biomedicines ; 11(5)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37239004

ABSTRACT

We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of visual transformers (VTs) using various configurations, including model size (small vs. large), training epochs (1 vs. 100), and quantization schemes (tensor- or channel-wise) using float32 or int8, on publicly available (DIBaS, n = 660) and locally compiled (n = 8500) datasets. Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin and ViT) were evaluated and compared to two convolutional neural networks (CNNs), ResNet and ConvNeXT. An overall comparison of performance, including accuracy, inference time, and model size, was also visualized. Frames per second (FPS) of small models consistently surpassed their large counterparts by a factor of 1-2×. DeiT small was the fastest VT in the int8 configuration (6.0 FPS). In conclusion, VTs consistently outperformed CNNs for Gram-stain classification in most settings, even on smaller datasets.
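
The following Python sketch illustrates the kind of post-training int8 quantization evaluated here, together with a rough frames-per-second measurement; it is not the study's code. The DeiT-small backbone (loaded via timm), the class count, and the input size are assumptions.

import time
import torch
import timm

model = timm.create_model("deit_small_patch16_224", pretrained=False, num_classes=33)
model.eval()

# Post-training dynamic int8 quantization of the linear layers (tensor-wise)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def frames_per_second(m, n_images: int = 50) -> float:
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(n_images):
            m(x)
    return n_images / (time.perf_counter() - start)

print("float32 FPS:", round(frames_per_second(model), 1))
print("int8 FPS   :", round(frames_per_second(quantized), 1))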

7.
Biomedicines ; 10(11)2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36359328

ABSTRACT

Despite the emergence of mobile health and the success of deep learning (DL), deploying production-ready DL models to resource-limited devices remains challenging. In particular, the speed of DL models becomes relevant at inference time. We aimed to accelerate inference for Gram-stain analysis, which is a tedious and manual task involving microorganism detection on whole slide images. Three DL models were optimized in three steps (transfer learning, pruning, and quantization) and then evaluated on two Android smartphones. Most convolutional layers (≥80%) had to be retrained for adaptation to the Gram-stain classification task. The combination of pruning and quantization demonstrated its utility in reducing model size and inference time without compromising model quality. Pruning mainly contributed to model size reduction by 15×, while quantization reduced inference time by 3× and decreased model size by 4×. The combination of the two reduced the baseline model by an overall factor of 46×. Optimized models were smaller than 6 MB and were able to process one image in <0.6 s on a Galaxy S10. Our findings demonstrate that methods for model compression are highly relevant for the successful deployment of DL solutions to resource-limited devices.
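
A minimal Python sketch of the pruning step described above, using PyTorch's magnitude-based (L1) unstructured pruning; the backbone, the two-class head, and the 80% sparsity target are illustrative assumptions rather than the study's exact configuration.

import torch
import torch.nn.utils.prune as prune
from torchvision import models

model = models.mobilenet_v2(num_classes=2)   # e.g. "microorganism present" vs. not

# Magnitude-based (L1) unstructured pruning of every convolutional layer
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")        # bake the pruning mask into the weights

# Zeroed weights compress well; pairing with int8 quantization shrinks the model further
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall sparsity after pruning: {zeros / total:.1%}")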

8.
J Med Internet Res ; 24(10): e38041, 2022 10 24.
Article in English | MEDLINE | ID: mdl-36279164

ABSTRACT

BACKGROUND: Visual analysis and data delivery in the form of visualizations are of great importance in health care, as such forms of presentation can reduce errors and improve care and can also help provide new insights into long-term disease progression. Information visualization and visual analytics also address the complexity of long-term, time-oriented patient data by reducing inherent complexity and facilitating a focus on underlying and hidden patterns. OBJECTIVE: This review aims to provide an overview of visualization techniques for time-oriented data in health care, supporting the comparison of patients. We systematically collected literature and report on the visualization techniques supporting the comparison of time-based data sets of single patients with those of multiple patients or their cohorts and summarized the use of these techniques. METHODS: This scoping review used the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist. After all collected articles were screened by 16 reviewers according to the criteria, 6 reviewers extracted the set of variables under investigation. The characteristics of these variables were based on existing taxonomies or identified through open coding. RESULTS: Of the 249 screened articles, we identified 22 (8.8%) that fit all criteria and reviewed them in depth. We collected and synthesized findings from these articles for medical aspects such as medical context, medical objective, and medical data type, as well as for the core investigated aspects of visualization techniques, interaction techniques, and supported tasks. The extracted articles were published between 2003 and 2019 and were mostly situated in clinical research. These systems used a wide range of visualization techniques, most frequently showing changes over time. Timelines and temporal line charts occurred 8 times each, followed by histograms with 7 occurrences and scatterplots with 5 occurrences. We report on the findings quantitatively through visual summarization, as well as qualitatively. CONCLUSIONS: The articles under review in general mitigated complexity through visualization and supported diverse medical objectives. We identified 3 distinct patient entities: single patients, multiple patients, and cohorts. Cohorts were typically visualized in condensed form, either through prior data aggregation or through visual summarization, whereas visualization of individual patients often contained finer details. All the systems provided mechanisms for viewing and comparing patient data. However, explicitly comparing a single patient with multiple patients or a cohort was supported only by a few systems. These systems mainly use basic visualization techniques, with some using novel visualizations tailored to a specific task. Overall, we found the visual comparison of measurements between single and multiple patients or cohorts to be underdeveloped, and we argue for further research in a systematic review, as well as the usefulness of a design space.
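
The following Python sketch (synthetic data) illustrates the most frequent technique found in the review: a temporal line chart that overlays a single patient's values on a condensed cohort summary, here a mean ± SD band. The measurement, cohort size, and values are invented for illustration.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
days = np.arange(0, 30)
cohort = 7.0 + 0.05 * days + rng.normal(0, 0.6, size=(40, days.size))  # 40 patients
patient = 7.2 + 0.12 * days + rng.normal(0, 0.3, size=days.size)       # one patient

mean, sd = cohort.mean(axis=0), cohort.std(axis=0)
fig, ax = plt.subplots(figsize=(7, 3))
ax.fill_between(days, mean - sd, mean + sd, alpha=0.3, label="cohort (mean ± SD)")
ax.plot(days, mean, linewidth=1)
ax.plot(days, patient, marker="o", markersize=3, label="single patient")
ax.set_xlabel("day since admission")
ax.set_ylabel("HbA1c (%)")   # illustrative measurement
ax.legend()
plt.tight_layout()
plt.show()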


Subjects
Checklist; Delivery of Health Care; Humans; Publications
9.
Stud Health Technol Inform ; 293: 19-27, 2022 May 16.
Article in English | MEDLINE | ID: mdl-35592955

ABSTRACT

The academic research environment is characterized by self-developed, innovative, customized solutions, which are often free for third parties to use, with open-source code and open licenses. On the other hand, they are maintained only to a very limited extent after the end of project funding. The ToolPool Gesundheitsforschung addresses the problem of finding ready-to-use solutions by building a registry of proven and supported tools, services, concepts, and consulting offers. The goal is to provide an up-to-date selection of "relevant" solutions for a given domain that are immediately usable and actually used by third parties, rather than aiming at a complete list of all solutions belonging to that domain. Proof of relevance and usage must be provided, for example, by concrete application scenarios, experience reports by uninvolved third parties, references in publications, or workshops held. Quality assurance is carried out for new entries against an agreed list of admission criteria and, for existing entries, at least once a year by a special task force. Currently, 79 solutions are represented; this number is to be significantly expanded by involving new editors from current national funding initiatives in Germany.


Subjects
Software; Epidemiologic Studies; Germany; Registries
10.
BMC Med Imaging ; 22(1): 69, 2022 04 13.
Article in English | MEDLINE | ID: mdl-35418051

ABSTRACT

BACKGROUND: Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging the knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for medical image classification tasks. METHODS: 425 peer-reviewed articles published in English up until December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS: The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION: The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
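
A minimal Python sketch of the feature-extractor setup recommended in the conclusion: a pretrained deep backbone is frozen and only a newly attached classification head is trained. The ResNet-50 backbone, ImageNet weights, and two-class head are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V2")   # pretrained deep model
for param in backbone.parameters():
    param.requires_grad = False                        # keep pretrained features fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # only this new head is trained

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())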


Subjects
Machine Learning; Neural Networks, Computer; Databases, Factual; Humans
11.
Nervenarzt ; 93(3): 279-287, 2022 Mar.
Article in German | MEDLINE | ID: mdl-33730181

ABSTRACT

BACKGROUND: Ward-equivalent treatment (StäB), a form of crisis resolution and home treatment in Germany, was introduced in 2018 as a new model of mental health service delivery for people with an indication for inpatient care. The rapid progress in the field of information and communication technology offers entirely new opportunities for innovative digital mental health care, such as telemedicine, eHealth, or mHealth interventions. OBJECTIVE: This review aims to provide a comprehensive overview of novel digital forms of service delivery that may contribute to personalized delivery of StäB, improve clinical and social outcomes, and reduce direct and indirect costs. METHOD: This work is based on a narrative review. RESULTS: Four primary digital forms of service delivery were identified that can be used for personalized delivery of StäB: (1) communication, continuity of care, and flexibility through online chat and video calls; (2) monitoring of symptoms and behavior in real time through ecological momentary assessment (EMA); (3) use of multimodal EMA data to generate and offer personalized feedback on subjective experience and behavioral patterns; and (4) adaptive ecological momentary interventions (EMI) tailored to the person, moment, and context in daily life. CONCLUSION: New digital forms of service delivery have considerable potential to increase the effectiveness and cost-effectiveness of crisis resolution, home treatment, and assertive outreach. An important next step is to model and initially evaluate these novel digital forms of service delivery in the context of StäB and to carefully investigate their quality from the user perspective, their safety, feasibility, initial process and outcome quality, as well as barriers to and facilitators of implementation.


Subjects
Ecological Momentary Assessment; Telemedicine; Germany; Humans
12.
JMIR Res Protoc ; 10(11): e31750, 2021 Nov 22.
Article in English | MEDLINE | ID: mdl-34813494

ABSTRACT

BACKGROUND: Provenance supports the understanding of data genesis, and it is a key factor to ensure the trustworthiness of digital objects containing (sensitive) scientific data. Provenance information contributes to a better understanding of scientific results and fosters collaboration on existing data as well as data sharing. This encompasses defining comprehensive concepts and standards for transparency and traceability, reproducibility, validity, and quality assurance during clinical and scientific data workflows and research. OBJECTIVE: The aim of this scoping review is to investigate existing evidence regarding approaches and criteria for provenance tracking as well as disclosing current knowledge gaps in the biomedical domain. This review covers modeling aspects as well as metadata frameworks for meaningful and usable provenance information during creation, collection, and processing of (sensitive) scientific biomedical data. This review also covers the examination of quality aspects of provenance criteria. METHODS: This scoping review will follow the methodological framework by Arksey and O'Malley. Relevant publications will be obtained by querying PubMed and Web of Science. All papers published in English between January 1, 2006, and March 23, 2021, will be included. Data retrieval will be accompanied by a manual search for grey literature. Potential publications will then be exported into reference management software, and duplicates will be removed. Afterwards, the obtained set of papers will be transferred into a systematic review management tool. All publications will be screened, extracted, and analyzed: title and abstract screening will be carried out by 4 independent reviewers. A majority vote is required to decide on the eligibility of papers based on the defined inclusion and exclusion criteria. Full-text reading will be performed independently by 2 reviewers, and in the last step, key information will be extracted using a pretested template. If agreement cannot be reached, the conflict will be resolved by a domain expert. Charted data will be analyzed by categorizing and summarizing the individual data items based on the research questions. Tabular or graphical overviews will be given, if applicable. RESULTS: The reporting follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). Electronic database searches in PubMed and Web of Science resulted in 469 matches after deduplication. As of September 2021, the scoping review is in the full-text screening stage. The data extraction using the pretested charting template will follow the full-text screening stage. We expect the scoping review report to be completed by February 2022. CONCLUSIONS: Information about the origin of healthcare data has a major impact on the quality and the reusability of scientific results as well as follow-up activities. This protocol outlines plans for a scoping review that will provide information about current approaches, challenges, and knowledge gaps with provenance tracking in biomedical sciences. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/31750.

14.
Front Oncol ; 11: 662013, 2021.
Article in English | MEDLINE | ID: mdl-34249698

ABSTRACT

Prehabilitation has shown its potential for most intra-cavity surgery patients in enhancing preoperative functional capacity and postoperative outcomes. However, its large-scale implementation is limited by several constraints, such as: i) unsolved practicalities of the service workflow; ii) challenges associated with change management in collaborative care; iii) insufficient access to prehabilitation; iv) a relevant percentage of program drop-outs; v) the need for program personalization; and vi) economic sustainability. Transferability of prehabilitation programs from the hospital setting to the community would potentially provide a new scenario with greater accessibility, as well as offer an opportunity to effectively address the aforementioned issues and, thus, optimize healthcare value generation. A core aspect to take into account for optimal management of prehabilitation programs is the use of proper technological tools enabling: i) customizable and interoperable integrated care pathways facilitating personalization of the service and effective engagement among stakeholders; ii) remote monitoring (i.e. physical activity, physiological signs, and patient-reported outcome and experience measures) to support patient adherence to the program and empowerment for self-management; and iii) use of health risk assessment supporting decision making for personalized service selection. The current manuscript details a proposal to bring digital innovation to community-based prehabilitation programs. Moreover, this approach has the potential to be adopted by programs supporting long-term management of cancer patients, chronic patients, and prevention of multimorbidity in subjects at risk.

15.
Sci Rep ; 11(1): 5529, 2021 03 09.
Article in English | MEDLINE | ID: mdl-33750857

ABSTRACT

Computer-assisted reporting (CAR) tools have been suggested to improve radiology report quality by context-sensitively recommending key imaging biomarkers. However, studies evaluating machine learning (ML) algorithms on cross-lingual ontological (RadLex) mappings for developing embedded CAR algorithms are lacking. Therefore, we compared ML algorithms developed on human expert-annotated features against those developed on fully automated cross-lingual (German to English) RadLex mappings using 206 CT reports of suspected stroke. The target label was whether the Alberta Stroke Programme Early CT Score (ASPECTS) should have been provided (yes/no: 154/52). We focused on probabilistic outputs of ML algorithms including tree-based methods, elastic net, support vector machines (SVMs) and fastText (linear classifier), which were evaluated in the same 5 × 5-fold nested cross-validation framework. This allowed for model stacking and classifier rankings. Performance was evaluated using calibration metrics (AUC, Brier score, log loss) and calibration plots. Contextual ML-based assistance recommending ASPECTS was feasible. SVMs showed the highest accuracies both on human-extracted (87%) and RadLex features (findings: 82.5%; impressions: 85.4%). FastText achieved the highest accuracy (89.3%) and AUC (92%) on impressions. Boosted trees fitted on findings had the best calibration profile. Our approach provides guidance for choosing ML classifiers for CAR tools in a fully automated and language-agnostic fashion using bag-of-RadLex terms on limited expert-labelled training data.
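
The following Python sketch (synthetic data, not the study's RadLex features) outlines a nested cross-validation of an SVM with probabilistic outputs and the calibration-oriented metrics named above; the fold counts and hyperparameter grid are assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, brier_score_loss, log_loss

# Toy stand-in for 206 reports with a 154/52 label split
X, y = make_classification(n_samples=206, n_features=300, weights=[0.25], random_state=0)

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

svm = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
tuned = GridSearchCV(svm, {"svc__C": [0.01, 0.1, 1, 10]}, scoring="roc_auc", cv=inner)

# Outer loop: out-of-fold probabilities give unbiased calibration metrics
proba = cross_val_predict(tuned, X, y, cv=outer, method="predict_proba")[:, 1]
print("AUC        :", round(roc_auc_score(y, proba), 3))
print("Brier score:", round(brier_score_loss(y, proba), 3))
print("log loss   :", round(log_loss(y, proba), 3))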

16.
Thorax ; 76(4): 380-386, 2021 04.
Article in English | MEDLINE | ID: mdl-33593931

ABSTRACT

BACKGROUND: Multiple breath washout (MBW) using sulfur hexafluoride (SF6) has the potential to reveal ventilation heterogeneity which is frequent in patients with obstructive lung disease and associated small airway dysfunction. However, reference data are scarce for this technique and mostly restricted to younger cohorts. We therefore set out to evaluate the influence of anthropometric parameters on SF6-MBW reference values in pulmonary healthy adults. METHODS: We evaluated cross-sectional data from 100 pulmonary healthy never-smokers and smokers (mean 51 (SD 20), range 20-88 years). Lung clearance index (LCI), acinar (Sacin) and conductive (Scond) ventilation heterogeneity were derived from triplicate SF6-MBW measurements. Global ventilation heterogeneity was calculated for the 2.5% (LCI2.5) and 5% (LCI5) stopping points. Upper limit of normal (ULN) was defined as the 95th percentile. RESULTS: Age was the only meaningful parameter influencing SF6-MBW parameters, explaining 47% (CI 33% to 59%) of the variance in LCI, 32% (CI 18% to 47%) in Sacin and 10% (CI 2% to 22%) in Scond. Mean LCI increases from 6.3 (ULN 7.4) to 8.8 (ULN 9.9) in subjects between 20 and 90 years. Smoking accounted for 2% (CI 0% to 8%) of the variability in LCI, 4% (CI 0% to 13%) in Sacin and 3% (CI 0% to 13%) in Scond. CONCLUSION: SF6-MBW outcome parameters showed an age-dependent increase from early adulthood to old age. The effect was most pronounced for global and acinar ventilation heterogeneity and smaller for conductive ventilation heterogeneity. No influence of height, weight and sex was seen. Reference values can now be provided for all important SF6-MBW outcome parameters over the whole age range. TRIAL REGISTRATION NUMBER: NCT04099225.
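
A minimal Python sketch (synthetic data) of how the two reported quantities can be derived: the share of LCI variance explained by age as the R² of a linear fit, and an age-dependent upper limit of normal taken as the 95th percentile of the residuals around that fit. The generated numbers are illustrative, not the study's reference values.

import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(20, 90, 100)
lci = 5.6 + 0.035 * age + rng.normal(0, 0.45, 100)        # plausible-looking toy values

slope, intercept = np.polyfit(age, lci, 1)
predicted = intercept + slope * age
r2 = 1 - np.sum((lci - predicted) ** 2) / np.sum((lci - lci.mean()) ** 2)
print(f"variance in LCI explained by age: {r2:.0%}")

uln_offset = np.percentile(lci - predicted, 95)            # 95th percentile of residuals
for a in (20, 90):
    mean_lci = intercept + slope * a
    print(f"age {a}: mean LCI {mean_lci:.1f}, ULN {mean_lci + uln_offset:.1f}")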


Subjects
Anthropometry; Breath Tests; Lung Diseases, Obstructive/physiopathology; Sulfur Hexafluoride/analysis; Adult; Age Factors; Aged; Aged, 80 and over; Cross-Sectional Studies; Female; Healthy Volunteers; Humans; Male; Middle Aged; Reference Values; Respiratory Function Tests; Smokers
17.
medRxiv ; 2021 Feb 05.
Article in English | MEDLINE | ID: mdl-33564777

ABSTRACT

Objectives: To perform an international comparison of the trajectory of laboratory values among hospitalized patients with COVID-19 who develop severe disease and identify optimal timing of laboratory value collection to predict severity across hospitals and regions. Design: Retrospective cohort study. Setting: The Consortium for Clinical Characterization of COVID-19 by EHR (4CE), an international multi-site data-sharing collaborative of 342 hospitals in the US and in Europe. Participants: Patients hospitalized with COVID-19, admitted before or after PCR-confirmed result for SARS-CoV-2. Primary and secondary outcome measures: Patients were categorized as "ever-severe" or "never-severe" using the validated 4CE severity criteria. Eighteen laboratory tests associated with poor COVID-19-related outcomes were evaluated for predictive accuracy by area under the curve (AUC), compared between the severity categories. Subgroup analysis was performed to validate a subset of laboratory values as predictive of severity against a published algorithm. A subset of laboratory values (CRP, albumin, LDH, neutrophil count, D-dimer, and procalcitonin) was compared between North American and European sites for severity prediction. Results: Of 36,447 patients with COVID-19, 19,953 (43.7%) were categorized as ever-severe. Most patients (78.7%) were 50 years of age or older and male (60.5%). Longitudinal trajectories of CRP, albumin, LDH, neutrophil count, D-dimer, and procalcitonin showed association with disease severity. Significant differences of laboratory values at admission were found between the two groups. With the exception of D-dimer, predictive discrimination of laboratory values did not improve after admission. Sub-group analysis using age, D-dimer, CRP, and lymphocyte count as predictive of severity at admission showed similar discrimination to a published algorithm (AUC=0.88 and 0.91, respectively). Both models deteriorated in predictive accuracy as the disease progressed. On average, no difference in severity prediction was found between North American and European sites. Conclusions: Laboratory test values at admission can be used to predict severity in patients with COVID-19. Prediction models show consistency across international sites highlighting the potential generalizability of these models.

18.
Appl Clin Inform ; 12(1): 17-26, 2021 01.
Article in English | MEDLINE | ID: mdl-33440429

ABSTRACT

BACKGROUND: Even though clinical trials are indispensable for medical research, they are frequently impaired by delayed or incomplete patient recruitment, resulting in cost overruns or aborted studies. Study protocols based on real-world data with precisely expressed eligibility criteria and realistic cohort estimations are crucial for successful study execution. The increasing availability of routine clinical data in electronic health records (EHRs) provides the opportunity to also support patient recruitment during the prescreening phase. While solutions for electronic recruitment support have been published, to our knowledge, no method for the prioritization of eligibility criteria in this context has been explored. METHODS: In the context of the Electronic Health Records for Clinical Research (EHR4CR) project, we examined the eligibility criteria of the KATHERINE trial. Criteria were extracted from the study protocol, deduplicated, and decomposed. A paper chart review and data warehouse query were executed to retrieve clinical data for the resulting set of simplified criteria separately from both sources. Criteria were scored according to disease specificity, data availability, and discriminatory power based on their content and the clinical dataset. RESULTS: The study protocol contained 35 eligibility criteria, which after simplification yielded 70 atomic criteria. For a cohort of 106 patients with breast cancer and neoadjuvant treatment, 47.9% of data elements were captured through paper chart review, with the data warehouse query yielding 26.9% of data elements. Score application resulted in a prioritized subset of 17 criteria, which yielded a sensitivity of 1.00 and specificity 0.57 on EHR data (paper charts, 1.00 and 0.80) compared with actual recruitment in the trial. CONCLUSION: It is possible to prioritize clinical trial eligibility criteria based on real-world data to optimize prescreening of patients on a selected subset of relevant and available criteria and reduce implementation efforts for recruitment support. The performance could be further improved by increasing EHR data coverage.
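
The following Python sketch (hypothetical data, not the KATHERINE criteria) shows how a prioritized criteria subset can be evaluated against actual recruitment as sensitivity and specificity; the prevalence and flag rates are invented so that, as in the study, no eligible patient is missed.

import numpy as np

rng = np.random.default_rng(3)
n_patients = 106

actually_recruited = rng.random(n_patients) < 0.15            # ground truth from the trial
# Prescreening flag: patient fulfils all prioritized criteria found in the EHR
# (constructed here so that every recruited patient is flagged)
meets_prioritized_criteria = actually_recruited | (rng.random(n_patients) < 0.4)

tp = np.sum(meets_prioritized_criteria & actually_recruited)
fn = np.sum(~meets_prioritized_criteria & actually_recruited)
tn = np.sum(~meets_prioritized_criteria & ~actually_recruited)
fp = np.sum(meets_prioritized_criteria & ~actually_recruited)

print("sensitivity:", round(tp / (tp + fn), 2))   # 1.00: no eligible patient missed
print("specificity:", round(tn / (tn + fp), 2))   # fraction of ineligible filtered out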


Subjects
Biomedical Research; Electronic Health Records; Clinical Trials as Topic; Electronics; Humans; Patient Selection; Research Design
20.
Cancers (Basel) ; 12(9)2020 Aug 21.
Article in English | MEDLINE | ID: mdl-32825612

ABSTRACT

Computer-aided diagnosis (CADx) approaches could help to objectify reporting on prostate mpMRI, but their use is in many cases hampered by custom-built algorithms that are not publicly available. The aim of this study was to develop an open-access CADx algorithm with high accuracy for the classification of suspicious lesions in mpMRI of the prostate. This retrospective study was approved by the local ethics commission, with waiver of informed consent. A total of 124 patients with 195 reported lesions were included. All patients received mpMRI of the prostate between 2014 and 2017, and transrectal ultrasound (TRUS)-guided and targeted biopsy within a time period of 30 days. Histopathology of the biopsy cores served as the standard of reference. Acquired imaging parameters included the size of the lesion, signal intensity (T2w images), diffusion restriction, prostate volume, and several dynamic parameters, along with the clinical parameters patient age and serum PSA level. Inter-reader agreement of the imaging parameters was assessed by calculating intraclass correlation coefficients. The dataset was stratified into a train set and a test set (156 and 39 lesions in 100 and 24 patients, respectively). Using the above parameters, a CADx based on an Extreme Gradient Boosting algorithm was developed on the train set and tested on the test set. Performance optimization focused on maximizing the area under the Receiver Operating Characteristic curve (ROC-AUC). The algorithm was made publicly available on the internet. The CADx reached an ROC-AUC of 0.908 during training and 0.913 during testing (p = 0.93). Additionally, established rule-in and rule-out criteria allowed classification of 35.8% of the malignant and 49.4% of the benign lesions with error rates of <2%. All imaging parameters featured excellent inter-reader agreement. This study presents an open-access CADx for the classification of suspicious lesions in mpMRI of the prostate with high accuracy. Applying the provided rule-in and rule-out criteria might help to further stratify the management of patients at risk.
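
A minimal Python sketch (synthetic features, not the published CADx) of an Extreme Gradient Boosting classifier evaluated by ROC AUC, together with rule-in and rule-out probability thresholds chosen so that the error rate within each ruled group stays below 2%; the feature set and sample split are assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Toy stand-in for 195 lesions with a handful of imaging/clinical parameters
X, y = make_classification(n_samples=195, n_features=9, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print("test ROC AUC:", round(roc_auc_score(y_test, proba), 3))

# Rule-in: lesions above the cut-off are called malignant, <2% of them are benign
# Rule-out: lesions below the cut-off are called benign, <2% of them are malignant
thresholds = np.linspace(0.01, 0.99, 99)
rule_in = [t for t in thresholds
           if (proba >= t).any() and (y_test[proba >= t] == 0).mean() < 0.02]
rule_out = [t for t in thresholds
            if (proba <= t).any() and (y_test[proba <= t] == 1).mean() < 0.02]
if rule_in:
    t = min(rule_in)   # most inclusive rule-in cut-off still below 2% error
    print(f"rule-in  >= {t:.2f}: covers {np.mean(proba >= t):.1%} of lesions")
if rule_out:
    t = max(rule_out)  # most inclusive rule-out cut-off still below 2% error
    print(f"rule-out <= {t:.2f}: covers {np.mean(proba <= t):.1%} of lesions")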
